Results 1 - 11 of 11
1.
Front Immunol ; 13: 976512, 2022.
Article in English | MEDLINE | ID: covidwho-2320841

ABSTRACT

COVID-19 prognoses suggest that a proportion of patients develop fibrosis, but there is no evidence to indicate whether patients have progression of mesenchymal transition (MT) in the lungs. The role of MT during the COVID-19 pandemic remains poorly understood. Using single-cell RNA sequencing, we profiled the transcriptomes of cells from the lungs of healthy individuals (n = 45), COVID-19 patients (n = 58), and idiopathic pulmonary fibrosis (IPF) patients (n = 64) to map the entire MT change. This analysis enabled us to map all matrix-producing cells at high resolution and identify distinct subpopulations of endothelial cells (ECs) and epithelial cells as the primary cellular sources of MT clusters during COVID-19. For the first time, we have identified early and late subgroups of endothelial-mesenchymal transition (EndMT) and epithelial-mesenchymal transition (EMT) through analysis of public single-cell sequencing databases. We assessed epithelial subgroups by age, smoking status, and gender, and the data suggest that the proportional changes in EMT in COVID-19 are statistically significant. Further enumeration of early and late EMT suggests a correlation between invasive genes and COVID-19. Finally, EndMT is upregulated in COVID-19 patients and enriched for more inflammatory cytokines. Further, by classifying EndMT into early and late stages, we found that early EndMT was positively correlated with entry factors, but this was not true for late EndMT. Exploring the MT state may help to mitigate the fibrotic impact of SARS-CoV-2 infection.
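The subgroup analysis above rests on scoring each cell for a mesenchymal expression programme. A minimal sketch of one common marker-set scoring idea (mean expression of a marker set minus the all-gene background), assuming a hypothetical per-cell expression dictionary and illustrative mesenchymal markers such as VIM and FN1; this is not the paper's actual scoring pipeline:

```python
def emt_score(expr, emt_markers):
    """Score one cell's EMT programme as the mean expression of a
    mesenchymal marker set minus the mean expression over all genes.

    expr        -- dict mapping gene name to expression value for one cell
    emt_markers -- list of marker genes (illustrative, e.g. VIM, FN1)
    """
    marker_mean = sum(expr[g] for g in emt_markers) / len(emt_markers)
    background = sum(expr.values()) / len(expr)
    return marker_mean - background

# Cells could then be binned into "early" vs "late" EMT by score quantiles.
```

Real single-cell pipelines use library routines with control gene sets and normalisation; the sketch only conveys the marker-minus-background idea.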


Subject(s)
COVID-19 , Epithelial-Mesenchymal Transition , Cytokines , Endothelial Cells/pathology , Epithelial-Mesenchymal Transition/genetics , Fibrosis , Humans , Pandemics , SARS-CoV-2 , Signal Transduction
2.
arxiv; 2023.
Preprint in English | PREPRINT-ARXIV | ID: ppzbmed-2303.07067v1

ABSTRACT

Federated learning (FL)-aided health diagnostic models can incorporate data from a large number of personal edge devices (e.g., mobile phones) while keeping the data local to the originating devices, largely ensuring privacy. However, such a cross-device FL approach for health diagnostics still poses many challenges due to both local data imbalance (in the extreme, local data consist of a single disease class) and global data imbalance (disease prevalence is generally low in a population). Since the federated server has no access to data distribution information, it is not trivial to solve the imbalance issue and obtain an unbiased model. In this paper, we propose FedLoss, a novel cross-device FL framework for health diagnostics. Here the federated server averages the models trained on edge devices according to the predictive loss on the local data, rather than using only the number of samples as weights. As the predictive loss better quantifies the data distribution at a device, FedLoss alleviates the impact of data imbalance. Through a real-world dataset on respiratory sound and symptom-based COVID-19 detection, we validate the superiority of FedLoss. It achieves competitive COVID-19 detection performance compared to a centralised model, with an AUC-ROC of 79%. It also outperforms state-of-the-art FL baselines in sensitivity and convergence speed. Our work not only demonstrates the promise of federated COVID-19 detection but also paves the way for privacy-preserving development of a broad range of mobile health models.
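The aggregation step described above (loss-weighted rather than sample-count-weighted averaging) can be sketched as follows. The abstract does not state the exact weighting function, so this sketch makes a hypothetical choice, weights proportional to each client's predictive loss, purely to illustrate the mechanism:

```python
def fedloss_aggregate(client_params, client_losses):
    """Aggregate client model parameters on the federated server.

    client_params -- list of per-client parameter vectors (lists of floats)
    client_losses -- per-client predictive loss on local data

    Hypothetical scheme: each client's weight is proportional to its
    loss, unlike vanilla FedAvg, which weights by local sample count.
    """
    total = sum(client_losses)
    coefs = [loss / total for loss in client_losses]
    n_params = len(client_params[0])
    return [sum(c * params[i] for c, params in zip(coefs, client_params))
            for i in range(n_params)]
```

The paper's actual weighting may differ (e.g., inverse loss or a normalised variant); the point is that the server needs only a scalar loss per client, never the raw local data.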


Subject(s)
COVID-19
3.
Frontiers in immunology ; 13, 2022.
Article in English | EuropePMC | ID: covidwho-2074008

ABSTRACT

COVID-19 prognoses suggest that a proportion of patients develop fibrosis, but there is no evidence to indicate whether patients have progression of mesenchymal transition (MT) in the lungs. The role of MT during the COVID-19 pandemic remains poorly understood. Using single-cell RNA sequencing, we profiled the transcriptomes of cells from the lungs of healthy individuals (n = 45), COVID-19 patients (n = 58), and idiopathic pulmonary fibrosis (IPF) patients (n = 64) to map the entire MT change. This analysis enabled us to map all matrix-producing cells at high resolution and identify distinct subpopulations of endothelial cells (ECs) and epithelial cells as the primary cellular sources of MT clusters during COVID-19. For the first time, we have identified early and late subgroups of endothelial-mesenchymal transition (EndMT) and epithelial-mesenchymal transition (EMT) through analysis of public single-cell sequencing databases. We assessed epithelial subgroups by age, smoking status, and gender, and the data suggest that the proportional changes in EMT in COVID-19 are statistically significant. Further enumeration of early and late EMT suggests a correlation between invasive genes and COVID-19. Finally, EndMT is upregulated in COVID-19 patients and enriched for more inflammatory cytokines. Further, by classifying EndMT into early and late stages, we found that early EndMT was positively correlated with entry factors, but this was not true for late EndMT. Exploring the MT state may help to mitigate the fibrotic impact of SARS-CoV-2 infection.

4.
arxiv; 2022.
Preprint in English | PREPRINT-ARXIV | ID: ppzbmed-2202.08981v1

ABSTRACT

The COVID-19 pandemic has caused massive humanitarian and economic damage. Teams of scientists from a broad range of disciplines have searched for methods to help governments and communities combat the disease. One avenue explored in the machine learning field is the prospect of a digital mass test that can detect COVID-19 from infected individuals' respiratory sounds. We present a summary of the results from the INTERSPEECH 2021 Computational Paralinguistics Challenge: the COVID-19 Cough (CCS) and COVID-19 Speech (CSS) Sub-Challenges.


Subject(s)
COVID-19
5.
arxiv; 2022.
Preprint in English | PREPRINT-ARXIV | ID: ppzbmed-2201.01232v2

ABSTRACT

Recent work has shown the potential of using audio data (e.g., cough, breathing, and voice) in screening for COVID-19. However, these approaches focus only on one-off detection: they detect infection from the current audio sample but do not monitor disease progression. Little work has explored continuously monitoring COVID-19 progression, especially recovery, through longitudinal audio data, even though tracking disease progression characteristics could lead to more timely treatment. The primary objective of this study is to explore the potential of longitudinal audio samples over time for COVID-19 progression prediction and, especially, recovery-trend prediction using sequential deep learning techniques. Crowdsourced respiratory audio data, including breathing, cough, and voice samples, from 212 individuals over 5-385 days were analyzed. We developed a deep learning-enabled tracking tool using gated recurrent units (GRUs) to detect COVID-19 progression by exploring the dynamics of individuals' historical audio biomarkers. The investigation comprised two parts: (1) COVID-19 detection in terms of positive and negative (healthy) tests, and (2) longitudinal disease progression prediction over time in terms of the probability of positive tests. The strong performance for COVID-19 detection, yielding an AUROC of 0.79, a sensitivity of 0.75, and a specificity of 0.71, supported the effectiveness of the approach compared to methods that do not leverage longitudinal dynamics. We further examined the predicted disease progression trajectory, which displayed high consistency with test results: a correlation of 0.75 in the test cohort and 0.86 in the subset of the test cohort who reported recovery. Our findings suggest that monitoring COVID-19 evolution via longitudinal audio data has potential for tracking individuals' disease progression and recovery.
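The GRU tracker described above consumes a time series of audio biomarkers and carries a hidden state forward. A from-scratch sketch of the standard scalar GRU update equations, stepped over a sequence; this is not the authors' trained model, and the parameter names are hypothetical:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gru_cell(x, h, p):
    """One scalar GRU step on input x with previous hidden state h.

    p holds the scalar weights/biases: update gate (wz, uz, bz),
    reset gate (wr, ur, br), and candidate state (wh, uh, bh).
    """
    z = sigmoid(p['wz'] * x + p['uz'] * h + p['bz'])        # update gate
    r = sigmoid(p['wr'] * x + p['ur'] * h + p['br'])        # reset gate
    h_tilde = math.tanh(p['wh'] * x + p['uh'] * (r * h) + p['bh'])
    return (1 - z) * h + z * h_tilde                        # blended state

def track(sequence, p, h0=0.0):
    """Run the GRU over a longitudinal sequence of audio-biomarker
    values; the final hidden state summarises the trajectory."""
    h = h0
    for x in sequence:
        h = gru_cell(x, h, p)
    return h
```

In practice the hidden state is a vector, the weights are matrices learned by backpropagation, and the final state feeds a classifier head producing the probability of a positive test.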


Subject(s)
COVID-19
6.
arxiv; 2021.
Preprint in English | PREPRINT-ARXIV | ID: ppzbmed-2106.15523v1

ABSTRACT

Researchers have been battling with the question of how to identify Coronavirus disease (COVID-19) cases efficiently, affordably, and at scale. Recent work has shown how audio-based approaches, which collect respiratory audio data (cough, breathing, and voice), can be used for testing; however, there is a lack of exploration of how biases and methodological decisions impact these tools' performance in practice. In this paper, we explore the realistic performance of audio-based digital testing for COVID-19. To investigate this, we collected a large crowdsourced respiratory audio dataset through a mobile app, alongside recent COVID-19 test results and symptoms intended as ground truth. Within the collected dataset, we selected 5,240 samples from 2,478 participants and split them into participant-independent sets for model development and validation, controlling for potential confounding factors (such as demographics and language). The unbiased model takes features extracted from breathing, cough, and voice signals as predictors and yields an AUC-ROC of 0.71 (95% CI: 0.65-0.77). We further explore different unbalanced distributions to show how biases and participant splits affect performance. Finally, we discuss how the realistic model presented could be integrated into clinical practice to realize continuous, ubiquitous, sustainable, and affordable testing at population scale.
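A participant-independent split, as used above, assigns all of a person's samples to exactly one side of the train/test divide, so the model cannot score well simply by recognising voices it has seen. A minimal sketch (the sample representation and split fraction are illustrative):

```python
import random

def participant_split(samples, test_frac=0.2, seed=0):
    """Split (participant_id, payload) samples so that no participant
    appears in both sets -- a participant-independent split.

    Participants, not samples, are shuffled and partitioned, so all of a
    person's recordings land on one side of the divide.
    """
    pids = sorted({pid for pid, _ in samples})
    rng = random.Random(seed)
    rng.shuffle(pids)
    n_test = max(1, int(len(pids) * test_frac))
    test_ids = set(pids[:n_test])
    train = [s for s in samples if s[0] not in test_ids]
    test = [s for s in samples if s[0] in test_ids]
    return train, test
```

Library equivalents exist (e.g., group-aware splitters in common ML toolkits); the point is that grouping by participant, not by sample, is what prevents identity leakage.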


Subject(s)
COVID-19 , Coronavirus Infections
7.
arxiv; 2021.
Preprint in English | PREPRINT-ARXIV | ID: ppzbmed-2104.02005v2

ABSTRACT

Recently, sound-based COVID-19 detection studies have shown great promise for scalable and prompt digital pre-screening. However, two unsolved issues still hinder practical use. First, datasets collected for model training are often imbalanced, with a considerably smaller proportion of users who tested positive, making it harder to learn representative and robust features. Second, deep learning models are generally overconfident in their predictions; clinically, false predictions aggravate healthcare costs, and estimating the uncertainty of a screening prediction would help address this. To handle these issues, we propose an ensemble framework in which multiple deep learning models for sound-based COVID-19 detection are developed from different but balanced subsets of the original data. In this way, the data are utilized more effectively than with traditional up-sampling and down-sampling approaches: an AUC of 0.74 with a sensitivity of 0.68 and a specificity of 0.69 is achieved. Simultaneously, we estimate uncertainty from the disagreement across the multiple models. We show that false predictions often yield higher uncertainty, enabling us to suggest that users whose prediction uncertainty exceeds a threshold repeat the audio test on their phones, or take clinical tests if digital diagnosis still fails. This study paves the way for a more robust sound-based COVID-19 automated screening system.
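The disagreement-based uncertainty idea above can be sketched with a few lines of standard-library Python: average the per-model probabilities for the prediction, and treat their spread as the uncertainty. The threshold value and triage rule here are illustrative, not the paper's calibrated choices:

```python
from statistics import mean, pstdev

def ensemble_predict(probs):
    """Combine per-model COVID-19 probabilities for one audio sample.
    Returns (ensemble probability, uncertainty), where uncertainty is
    the disagreement (population std dev) across the models."""
    return mean(probs), pstdev(probs)

def triage(probs, threshold=0.15):
    """Hypothetical triage rule: predict positive at p >= 0.5, and flag
    the sample for a repeat audio test when models disagree strongly."""
    p, u = ensemble_predict(probs)
    label = p >= 0.5
    retest = u > threshold  # high disagreement -> suggest repeating test
    return label, retest
```

When the member models agree (e.g., all near 0.9), uncertainty is low and the label stands; when they split (e.g., 0.1 vs 0.9), the sample is flagged for re-testing, mirroring the abstract's observation that false predictions tend to carry higher uncertainty.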


Subject(s)
COVID-19
8.
arxiv; 2021.
Preprint in English | PREPRINT-ARXIV | ID: ppzbmed-2102.13468v1

ABSTRACT

The INTERSPEECH 2021 Computational Paralinguistics Challenge addresses four different problems for the first time in a research competition under well-defined conditions: in the COVID-19 Cough and COVID-19 Speech Sub-Challenges, a binary classification of COVID-19 infection has to be made based on coughing sounds and speech; in the Escalation Sub-Challenge, a three-way assessment of the level of escalation in a dialogue is featured; and in the Primates Sub-Challenge, four species versus background need to be classified. We describe the Sub-Challenges, baseline feature extraction, and classifiers based on the 'usual' ComParE and BoAW features, as well as deep unsupervised representation learning using the auDeep toolkit and deep feature extraction from pre-trained CNNs using the Deep Spectrum toolkit; in addition, we add deep end-to-end sequential modelling and partial linguistic analysis.


Subject(s)
COVID-19
9.
arxiv; 2021.
Preprint in English | PREPRINT-ARXIV | ID: ppzbmed-2102.05225v1

ABSTRACT

The development of fast and accurate screening tools, which could facilitate testing and prevent more costly clinical tests, is key in the current COVID-19 pandemic. In this context, some initial work shows promise in detecting diagnostic signals of COVID-19 from audio sounds. In this paper, we propose a voice-based framework to automatically detect individuals who have tested positive for COVID-19. We evaluate the performance of the proposed framework on a subset of data crowdsourced from our app, containing 828 samples from 343 participants. By combining voice signals and reported symptoms, an AUC of 0.79 has been attained, with a sensitivity of 0.68 and a specificity of 0.82. We hope that this study opens the door to rapid, low-cost, and convenient pre-screening tools to automatically detect the disease.
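Sensitivity/specificity pairs like those quoted above come from thresholding a model's output probability and counting the four confusion-matrix cells. A minimal sketch (the threshold value is illustrative; real systems tune it on validation data):

```python
def sens_spec(labels, scores, threshold=0.5):
    """Compute (sensitivity, specificity) at a decision threshold.

    labels -- 1 for tested-positive, 0 for negative
    scores -- model output probabilities, one per sample
    """
    tp = sum(1 for y, s in zip(labels, scores) if y == 1 and s >= threshold)
    fn = sum(1 for y, s in zip(labels, scores) if y == 1 and s < threshold)
    tn = sum(1 for y, s in zip(labels, scores) if y == 0 and s < threshold)
    fp = sum(1 for y, s in zip(labels, scores) if y == 0 and s >= threshold)
    return tp / (tp + fn), tn / (tn + fp)  # recall on positives, negatives
```

Sliding the threshold trades the two off: lowering it raises sensitivity (fewer missed cases) at the cost of specificity, which is why papers report both alongside the threshold-free AUC.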


Subject(s)
COVID-19
10.
arxiv; 2021.
Preprint in English | PREPRINT-ARXIV | ID: ppzbmed-2102.08251v1

ABSTRACT

The recent outbreak of COVID-19 poses a serious threat to people's lives, and epidemic control strategies have also damaged the economy by cutting off people's daily commutes. In this paper, we develop an Individual-based Reinforcement Learning Epidemic Control Agent (IDRLECA) to search for smart epidemic control strategies that can simultaneously minimize infections and the cost of mobility intervention. IDRLECA first uses an infection probability model to calculate the current infection probability of each individual. The infection probabilities, together with individuals' health status and movement information, are then fed to a novel GNN to estimate the spread of the virus through human contacts. The estimated risks are used to further support an RL agent in selecting individual-level epidemic-control actions. The training of IDRLECA is guided by a specially designed reward function considering both the cost of mobility intervention and the effectiveness of epidemic control. Moreover, we design a constraint for control-action selection that eases the selection problem and further improves exploration efficiency. Extensive experimental results demonstrate that IDRLECA can suppress infections at a very low level while retaining more than 95% of human mobility.
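The abstract names the two terms the reward function balances but not its exact form. A hypothetical linear combination of those two terms, with illustrative coefficients, conveys the shape of such a reward:

```python
def reward(new_infections, n_restricted, alpha=1.0, beta=0.01):
    """Hypothetical per-step RL reward: penalise both new infections
    (epidemic-control effectiveness) and the number of individuals whose
    mobility is restricted this step (mobility-intervention cost).

    alpha and beta are illustrative trade-off coefficients; the paper's
    actual reward design is not specified in the abstract.
    """
    return -(alpha * new_infections + beta * n_restricted)
```

Tuning beta upward makes the agent favour mobility (fewer restrictions, more infections tolerated), while tuning alpha upward favours suppression; the abstract's result (low infections with more than 95% of mobility retained) corresponds to finding a policy that scores well on both terms at once.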


Subject(s)
COVID-19
11.
arxiv; 2020.
Preprint in English | PREPRINT-ARXIV | ID: ppzbmed-2006.05919v3

ABSTRACT

Audio signals generated by the human body (e.g., sighs, breathing, heart, digestion, and vibration sounds) have routinely been used by clinicians as indicators to diagnose disease or assess disease progression. Until recently, such signals were usually collected through manual auscultation at scheduled visits. Research has now started to use digital technology to gather bodily sounds (e.g., from digital stethoscopes) for cardiovascular or respiratory examination, which could then be used for automatic analysis. Some initial work shows promise in detecting diagnostic signals of COVID-19 from voice and coughs. In this paper, we describe our analysis of a large-scale crowdsourced dataset of respiratory sounds collected to aid diagnosis of COVID-19. We use coughs and breathing to understand how discernible COVID-19 sounds are from those of asthma patients or healthy controls. Our results show that even a simple binary machine learning classifier can correctly classify healthy and COVID-19 sounds. We also show how we distinguish users who tested positive for COVID-19 and have a cough from healthy users with a cough, and from users with asthma and a cough. Our models achieve an AUC above 80% across all tasks. These results are preliminary and only scratch the surface of the potential of this type of data and audio-based machine learning. This work opens the door to further investigation of how automatically analysed respiratory patterns could be used as pre-screening signals to aid COVID-19 diagnosis.
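The AUC figures reported across these studies can be computed without any library via the Mann-Whitney formulation: the probability that a randomly chosen positive sample scores higher than a randomly chosen negative one, with ties counting half. A minimal sketch:

```python
def auroc(labels, scores):
    """Area under the ROC curve via the rank (Mann-Whitney) formulation.

    labels -- 1 for positive, 0 for negative
    scores -- classifier scores, one per sample (higher = more positive)
    """
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    # Count positive-vs-negative pairwise "wins"; ties score 0.5.
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

An AUC of 1.0 means every positive outscores every negative, 0.5 is chance level, and "above 80%" as reported here means a randomly chosen COVID-19 sample outscores a randomly chosen control more than four times out of five. The quadratic pairwise loop is fine for small evaluations; production metrics use a sort-based O(n log n) variant.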


Subject(s)
COVID-19 , Asthma , Cardiovascular Diseases